Preventing Algorithmic Bias in the Development of Algorithmic Decision-Making Systems: A Delphi Study
In this digital era, we encounter automated decisions made about or on behalf of us by so-called Algorithmic Decision-Making (ADM) systems. While ADM systems can provide promising business opportunities, their implementation poses numerous challenges. Algorithmic bias that enters these systems may result in systematic discrimination and unfair decisions by favoring certain individuals over others. Several approaches have been proposed to correct erroneous decision-making in the form of algorithmic bias. However, proposed remedies have mostly dealt with identifying algorithmic bias after the unfair decision has been made rather than preventing it. In this study, we use the Delphi method to propose an ADM systems development process and to identify sources of algorithmic bias at each step of this process, together with remedies. Our outputs can pave the way toward ethics-by-design for fair and trustworthy ADM systems.
Identification and analysis of handovers in organisations using process model repositories
Purpose: Identifying handovers is an important but difficult-to-achieve goal for companies, as handovers bring advantages by allowing specialisation in processes as well as disadvantages by creating error-prone interfaces. Design/methodology/approach: Conceptualisation of a method based on theory and evaluation with company data using a process model repository. Findings: The method makes it possible to evaluate handovers from the perspective of roles in processes and the grouping of employees into organisational units. It uses existing process model repositories, connected with organisational chart information in companies, to determine the density of handovers. The method is successfully evaluated using the example of a major telecommunications company with 1,010 process models in its repository. Practical implications: Companies can determine, on various levels up to the overall organisational level, in which parts of the company efforts are best spent to manage handovers in an optimal way. Originality/value: This paper is the first to show how handovers can be conceptualised and identified with a large-scale method.
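The abstract does not spell out how handover density is computed; a minimal sketch of one plausible reading, in which a handover is a transition between consecutive activities whose roles belong to different organisational units (all process, role, and unit names below are hypothetical illustrations, not data from the paper):

```python
# Hypothetical sketch: handover density in one process model.
# A "handover" is taken to be a transition between consecutive
# activities performed by roles in different organisational units.

def handover_density(activities, role_to_unit):
    """activities: ordered list of (activity_name, role) pairs.
    role_to_unit: mapping from role to organisational unit."""
    transitions = list(zip(activities, activities[1:]))
    if not transitions:
        return 0.0
    handovers = sum(
        1 for (_, r1), (_, r2) in transitions
        if role_to_unit[r1] != role_to_unit[r2]
    )
    return handovers / len(transitions)

process = [
    ("Receive order", "Sales Agent"),
    ("Check credit", "Credit Analyst"),
    ("Approve order", "Credit Analyst"),
    ("Ship goods", "Warehouse Clerk"),
]
units = {
    "Sales Agent": "Sales",
    "Credit Analyst": "Finance",
    "Warehouse Clerk": "Logistics",
}
print(handover_density(process, units))  # 2 of 3 transitions cross units
```

Aggregating such per-process densities over a repository of process models would then indicate where in the organisation handover-management effort is best spent.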
Ethical Risks, Concerns, and Practices of Affective Computing: A Thematic Analysis
The recent advances in artificial intelligence (AI) have drawn the attention of the public, policymakers, practitioners, and scientists to the ethical implications of AI. Affective computing is among the sensitive topics, as it deals with human emotions and affect. Research and applications in this field are perceived to raise substantial risks. In this study, we conducted a thematic analysis of the ethical impact statements of 70 papers accepted for presentation at the ACII conference. Our aim was to explore how the affective computing research community perceives risks and concerns related to ethics in this field, and how it attempts to address and mitigate these risks. We report the findings of this thematic analysis along with an evaluation of the potential impact of regulations such as the EU AI Act on the field of affective computing.
Detecting Role Inconsistencies in Process Models
Business process models capture crucial information about business operations. To overcome the challenge of maintaining process definitions in large process repositories, researchers have suggested methods to discover errors in the functional and behavioral perspectives of process models. However, there is a gap in the literature on the detection of problems in the organizational perspective of process models, which is critical for managing resources and responsibilities within organizations. In this paper, we introduce an approach to automatically detect inconsistencies between activities and roles in process models. Our approach applies natural language processing techniques and enterprise semantics to identify ambiguous, redundant, and missing roles in textual descriptions. We applied our approach to the process model repository of a major telecommunication company. A quantitative evaluation with 282 real-life activities showed that the approach can accurately discover role inconsistencies. Practitioners can achieve significant quality improvements in their process model repositories by applying the approach to process models complemented with textual descriptions.
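The paper's NLP pipeline is not detailed in this abstract; a minimal sketch of how the three inconsistency types it names (ambiguous, redundant, and missing roles) might be flagged by matching an activity's textual description against a known-roles vocabulary (the vocabulary, examples, and matching rules are illustrative assumptions, not the authors' method):

```python
# Hypothetical sketch: flagging role inconsistencies between an
# activity's textual description and its assigned role.

KNOWN_ROLES = {"sales agent", "credit analyst", "warehouse clerk"}

def check_roles(description, assigned_role):
    """Return a list of inconsistency flags for one activity."""
    text = description.lower()
    mentioned = {r for r in KNOWN_ROLES if r in text}
    flags = []
    if assigned_role is None:
        flags.append("missing role")        # no role assigned at all
    else:
        assigned = assigned_role.lower()
        if assigned in mentioned and len(mentioned) == 1:
            flags.append("redundant role")  # text merely repeats the assigned role
        if mentioned - {assigned}:
            flags.append("ambiguous role")  # text names a different role
    return flags

print(check_roles("The credit analyst checks the order", "Credit Analyst"))
# ['redundant role']
print(check_roles("Forward the request to the warehouse clerk", "Sales Agent"))
# ['ambiguous role']
print(check_roles("Archive the contract", None))
# ['missing role']
```

A real pipeline would replace the substring match with proper tokenisation and use enterprise semantics (e.g. role synonyms from an organisational chart) rather than a flat vocabulary.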
Interpretable Explainability in Facial Emotion Recognition and Gamification for Data Collection
Training facial emotion recognition models requires large sets of data and costly annotation processes. To alleviate this problem, we developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans. The game, which we named Facegame, challenges the players to imitate a displayed image of a face that portrays a particular basic emotion. Every round played by the player creates new data consisting of a set of facial features and landmarks, already annotated with the emotion label of the target facial expression. Such an approach effectively creates a robust, sustainable, and continuous machine learning training process. We evaluated Facegame with an experiment that revealed several contributions to the field of affective computing. First, the gamified data collection approach allowed us to access a rich variation of facial expressions of each basic emotion, due to the natural variations in the players' facial expressions and their expressive abilities. We report improved accuracy when the collected data were used to enrich well-known in-the-wild facial emotion datasets and subsequently used for training facial emotion recognition models. Second, the natural language prescription method used by Facegame constitutes a novel approach to interpretable explainability that can be applied to any facial emotion recognition model. Finally, we observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
Comment: 8 pages, 8 figures, 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII)
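The abstract's key idea is that each game round yields a training record whose label comes for free from the target image the player imitated. A minimal sketch of what such a record might look like (the field names and values are hypothetical, not taken from Facegame):

```python
# Hypothetical sketch: the kind of record one round of a gamified
# collection tool could emit -- landmarks plus an inherited emotion label.

from dataclasses import dataclass

@dataclass
class EmotionSample:
    landmarks: list   # (x, y) facial landmark coordinates, normalised
    emotion: str      # label inherited from the target image
    player_id: str

def make_sample(landmarks, target_emotion, player_id):
    # The label comes "for free" from the image the player imitated,
    # so no separate human annotation step is needed.
    return EmotionSample(landmarks, target_emotion, player_id)

sample = make_sample([(0.31, 0.42), (0.58, 0.41)], "happiness", "p001")
print(sample.emotion)  # happiness
```

Accumulating such records across rounds and players is what makes the training process continuous: each play session extends the labeled dataset without any annotator in the loop.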